Contextual Importance and Utility in Python: New Functionality and Insights with the py-ciu Package
The availability of easy-to-use and reliable software implementations is important for allowing researchers in academia and industry to test, assess, and adopt eXplainable AI (XAI) methods. This paper describes py-ciu, the Python implementation of the Contextual Importance and Utility (CIU) model-agnostic, post-hoc explanation method, and illustrates capabilities of CIU that go beyond the current state of the art and could be useful to XAI practitioners in general.
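The core quantities behind CIU can be sketched independently of the package. Contextual Importance (CI) measures how much the output can change when one feature varies over its range (relative to the output's absolute range), and Contextual Utility (CU) measures where the current output sits within that feature-induced range. The following is a minimal grid-sampling sketch of these standard definitions, not the actual py-ciu API; the function name and signature are illustrative:

```python
import numpy as np

def ciu_for_feature(predict, x, feature, lo, hi, abs_min, abs_max, n=100):
    """Estimate CI and CU for one feature of instance x by varying that
    feature over [lo, hi] while holding the other features fixed."""
    grid = np.linspace(lo, hi, n)
    samples = np.tile(x, (n, 1))
    samples[:, feature] = grid          # perturb only the chosen feature
    outputs = predict(samples)
    cmin, cmax = outputs.min(), outputs.max()
    ci = (cmax - cmin) / (abs_max - abs_min)   # importance: share of output range
    y = predict(x.reshape(1, -1))[0]
    cu = 0.5 if cmax == cmin else (y - cmin) / (cmax - cmin)  # utility of current value
    return ci, cu

# Toy linear model: y = 0.7*x0 + 0.3*x1, with outputs in [0, 1] for inputs in [0, 1].
model = lambda X: 0.7 * X[:, 0] + 0.3 * X[:, 1]
x = np.array([1.0, 0.0])
ci, cu = ciu_for_feature(model, x, feature=0, lo=0.0, hi=1.0,
                         abs_min=0.0, abs_max=1.0)
print(round(ci, 3), round(cu, 3))  # → 0.7 1.0
```

For this linear model, CI of feature 0 equals its coefficient share of the output range (0.7), and CU is 1.0 because the current value x0 = 1.0 maximizes the output over the feature's range.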
Contextual Importance and Utility: a Theoretical Foundation
This paper provides new theory to support the eXplainable AI (XAI) method Contextual Importance and Utility (CIU). CIU arithmetic is based on the concepts of Multi-Attribute Utility Theory, which gives CIU a solid theoretical foundation. The novel concept of contextual influence is also defined, making it possible to compare CIU directly with so-called additive feature attribution (AFA) methods for model-agnostic outcome explanation. One key takeaway is that the "influence" concept used by AFA methods is inadequate for outcome explanation, even for simple models. Experiments with simple models show that contextual importance (CI) and contextual utility (CU) produce explanations in cases where influence-based methods fail. It is also shown that CI and CU guarantee explanation faithfulness towards the explained model.
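In the CIU literature, contextual influence is commonly formulated as the product of importance and the utility's deviation from a neutral baseline, phi = CI * (CU - phi0), which is what enables direct comparison with additive feature attribution values. A minimal sketch under that formulation (the default phi0 = 0.5 is an assumption):

```python
def contextual_influence(ci: float, cu: float, phi0: float = 0.5) -> float:
    """Signed influence derived from CI and CU relative to a neutral
    utility baseline phi0: positive values push the output up, negative
    values push it down."""
    return ci * (cu - phi0)

# Equally important features can have opposite influence,
# depending on how favourable their current values are:
print(contextual_influence(0.8, 1.0))  # → 0.4
print(contextual_influence(0.8, 0.0))  # → -0.4
```

Note that influence collapses to zero whenever CU equals the baseline, even if CI is large; this is one way to see why an influence value alone can hide that a feature matters.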
Explanations of Black-Box Model Predictions by Contextual Importance and Utility
Sule Anjomshoae, Kary Främling, Amro Najjar
The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users. Explanations for the end-user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations that are easily understandable by experts as well as novice users. This method explains the prediction results without transforming the model into an interpretable one. We present an example of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of explanations in a car selection example and Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
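As a sketch of how CI/CU numbers might be turned into the kind of natural-language statements the abstract describes: the thresholds and wording below are illustrative assumptions, not the paper's exact vocabulary.

```python
def verbalise(feature: str, ci: float, cu: float) -> str:
    """Map CI/CU values to an explanation sentence.

    Threshold values and phrasing are illustrative assumptions."""
    if ci > 0.75:
        imp = "highly important"
    elif ci > 0.5:
        imp = "important"
    elif ci > 0.25:
        imp = "somewhat important"
    else:
        imp = "not important"
    util = "favourable" if cu >= 0.5 else "unfavourable"
    return f"{feature} is {imp}, and its current value is {util} for this outcome."

print(verbalise("price", 0.8, 0.9))
# → price is highly important, and its current value is favourable for this outcome.
```

Separating importance (CI) from favourability (CU) is what lets such sentences say both *whether* a feature matters and *in which direction* its current value works, which a single influence number cannot express.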